A Framework for Textbook Enhancement and Learning using Crowd-sourced Annotations

Authors

  • Anamika Chhabra
  • Sudarshan Iyengar
  • Poonam Saini
  • Rajesh Shreedhar Bhat
Abstract

Despite significant improvements in educational aids for the teaching-learning process, most of the educational content available to students is less than optimal in terms of being up-to-date, exhaustive, and easy to understand. There is a need to iteratively improve educational material based on feedback collected from students' learning experience. This can be achieved by observing students' interactions with the content and then having the authors modify it based on this feedback. Hence, we aim to facilitate and promote communication among the communities of authors, instructors, and students in order to gradually improve educational material. Such a system also aids students' learning by encouraging student-to-student teaching. Underpinning these objectives, we provide the framework of a platform named Crowdsourced Annotation System (CAS), where people from these communities can collaborate and benefit from each other. We use the concept of in-context annotations, through which students can add comments about the given text while learning it. An experiment was conducted with 60 students, who tried to learn an article from a textbook by annotating it over four days. According to the results of the experiment, most of the students were highly satisfied with the use of CAS. They stated that the system is extremely useful for learning and that they would like to use it for learning other concepts in the future.


Similar Resources

Clickstream analysis for crowd-based object segmentation with confidence

With the rapidly increasing interest in machine learning based solutions for automatic image annotation, the availability of reference annotations for algorithm training is one of the major bottlenecks in the field. Crowdsourcing has evolved as a valuable option for low-cost and large-scale data annotation; however, quality control remains a major issue which needs to be addressed. To our knowl...


Crowdsourcing image annotation for nucleus detection and segmentation in computational pathology: evaluating experts, automated methods, and the crowd.

The development of tools in computational pathology to assist physicians and biomedical scientists in the diagnosis of disease requires access to high-quality annotated images for algorithm learning and evaluation. Generating high-quality expert-derived annotations is time-consuming and expensive. We explore the use of crowdsourcing for rapidly obtaining annotations for two core tasks in comp...


Reasoning on Crowd-Sourced Semantic Annotations to Facilitate Cataloguing of 3D Artefacts in the Cultural Heritage Domain

The 3D Semantic Annotation (3DSA) system expedites the classification of 3D digital surrogates from the cultural heritage domain, by leveraging crowd-sourced semantic annotations. More specifically, the 3DSA system generates high-level classifications of 3D objects by applying rule-based reasoning across community-generated annotations and low-level shape and size attributes. This paper describ...


The Effects of Multimedia Annotations on Iranian EFL Learners’ L2 Vocabulary Learning

In our modern technological world, Computer-Assisted Language learning (CALL) is a new realm towards learning a language in general, and learning L2 vocabulary in particular. It is assumed that the use of multimedia annotations promotes language learners’ vocabulary acquisition. Therefore, this study set out to investigate the effects of different multimedia annotations (still picture annotatio...


Can the Crowd be Controlled?: A Case Study on Crowd Sourcing and Automatic Validation of Completed Tasks based on User Modeling

Annotation is an essential step in the development cycle of many Natural Language Processing (NLP) systems. Lately, crowdsourcing has been employed to facilitate large scale annotation at a reduced cost. Unfortunately, verifying the quality of the submitted annotations is a daunting task. Existing approaches address this problem either through sampling or redundancy. However, these approaches d...




Journal:
  • CoRR

Volume: abs/1503.06009  Issue: 

Pages: -

Publication date: 2015